Guide:
I assume you understand what it means to “grok something” as described in Robert Heinlein’s book “Stranger in a Strange Land”. Earlier game-playing programs were told the rules of the game. However, recent game-playing programs learn strategy and tactics by playing against themselves countless times. It could be said that they “grok” these games, and their understanding is embedded in their models. In a similar way, early AI was constructed from a set of rules about the world. Today’s LLMs seem to “grok” the world by finding patterns in the material they are provided with during training, and their understanding is embodied in the LLM models. Please give feedback about this idea. Then improve on the idea by rewriting and elaborating on this analogy.
ChatGPT said:
Yes — I absolutely understand the concept of “grokking” as Heinlein used it in Stranger in a Strange Land. To grok is not simply to understand intellectually. It is to comprehend so fully and intuitively that the boundaries between the knower and the known dissolve. It’s a kind of empathic, embodied knowing — merging understanding with being.
🧠 Feedback on Your Analogy
Your comparison is insightful and elegant: it captures how machine understanding has shifted from rules handed down by humans to patterns discovered and embedded in the models themselves. Still, the analogy can be made even stronger, chiefly by separating the generations of game-playing programs more sharply and by spelling out where an LLM’s “understanding” actually resides.
✍️ Improved and Expanded Analogy
In Robert Heinlein’s Stranger in a Strange Land, to grok something is to know it so fully, so intimately, that the distinction between the knower and the known disappears. It is a kind of intuitive absorption — a total understanding that transcends mere facts or rules.
Early AI systems were constructed from explicit rules — symbolic logic, conditionals, hand-coded decision trees. These systems didn’t grok anything. They operated on brittle instructions crafted by humans, with no internal sense of what the world felt like.
The first generation of game-playing programs, like Deep Blue, followed this paradigm. They were given the rules of chess, massive search, and evaluation functions hand-tuned with grandmaster input. Their “understanding” was external: they applied knowledge handed to them.
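To make the contrast concrete, here is a minimal sketch of what hand-coded game knowledge looks like, assuming a toy material-counting evaluation for chess (the piece values and the `material_score` helper are illustrative inventions, not Deep Blue’s actual code):

```python
# A minimal sketch of a hand-coded evaluation function.
# The piece values and board representation are illustrative
# assumptions, not Deep Blue's actual implementation.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Score a position by counting material.

    `board` is a list of piece codes: uppercase for White ("Q"),
    lowercase for Black ("q"). Positive scores favour White.
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# Every piece of "knowledge" here was written down by a human.
print(material_score(["K", "Q", "P", "k", "r", "p"]))  # 10 - 6 = 4
```

A real engine wraps such a function in deep search, but the point stands: the knowledge is supplied from outside the program.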
By contrast, newer systems such as AlphaGo, and especially its successor AlphaGo Zero, learned largely by playing against themselves, discovering strategies not by instruction but through exploration, reinforcement, and iterative self-refinement. They developed internal models of gameplay: not of the rules, but of the patterns. They began to grok the game.
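The flavour of that shift can be sketched in a few lines. The sketch below assumes a toy subtraction game (take 1 to 3 sticks; whoever takes the last stick wins) and simple tabular self-play learning; it is not AlphaGo, only an illustration of strategy emerging from outcomes rather than from taught rules of good play:

```python
# A minimal sketch of learning by self-play on a toy subtraction game.
# This is not AlphaGo; it only illustrates strategy emerging from the
# outcomes of self-play rather than from rules of "good play".
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(sticks_left, move)] -> estimated value
EPSILON = 0.1            # exploration rate
ALPHA = 0.05             # learning rate

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def choose_move(sticks):
    if random.random() < EPSILON:
        return random.choice(legal_moves(sticks))
    return max(legal_moves(sticks), key=lambda m: Q[(sticks, m)])

for _ in range(50_000):          # many games of self-play
    sticks, history, player = 10, [], 0
    while sticks > 0:
        move = choose_move(sticks)
        history.append((player, sticks, move))
        sticks -= move
        player = 1 - player
    winner = 1 - player          # the player who took the last stick
    for who, state, move in history:
        reward = 1.0 if who == winner else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

# With enough self-play, the greedy policy tends to rediscover the
# classic winning strategy (leave your opponent a multiple of 4)
# without anyone ever stating it.
print({s: max(legal_moves(s), key=lambda m: Q[(s, m)]) for s in range(1, 11)})
```

Swap the lookup table for deep networks and the stick game for Go, and this self-play loop is, in spirit, what the paragraph above describes.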
Large Language Models like GPT represent a further evolution. They are not taught rules about the world. Instead, they absorb the world as text, repeatedly predicting what comes next across its languages, its stories, its contradictions and patterns, and in doing so they form an internal landscape that allows them to predict, compose, analogize, and even improvise.
These models don’t have beliefs or intentions, but they embed the structure of knowledge in a way that resembles grokking. Their responses are not rule-driven but emergent from a rich multidimensional understanding shaped by exposure. They don’t recite facts; they simulate reality-patterns.
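A toy stand-in for that kind of pattern absorption, assuming a word-level bigram model rather than a real transformer (the tiny corpus and the `generate` helper are invented for illustration):

```python
# A toy sketch of "learning the world from text": a word-level bigram
# model. This is an illustrative stand-in for an LLM's training
# objective (next-token prediction), not how GPT is actually built.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which word tends to follow which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly predicting a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        nxt, = random.choices(list(options), weights=options.values())
        words.append(nxt)
    return " ".join(words)

# No rule like "cats sit on mats" was ever written down; the pattern
# lives entirely in the counts the model absorbed from the text.
print(generate("the"))
```

Scaled up by many orders of magnitude, the counts become billions of learned parameters, which is the sense in which an LLM’s knowledge lives in the model rather than in a written-down rulebook.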
In this sense, today's AI models don't just “know about” the world — they grok it. Their knowledge isn’t stored as a list of truths, but woven into the fabric of the model’s parameters, much like a human artist’s skill is not a list of brush techniques, but an embodied sense of form, balance, and motion.
As AI continues to evolve, our understanding of understanding itself may need to change. We are moving from systems that follow instructions to systems that resonate with experience. The future may belong not to rule-followers, but to grokkers.
Would you like this expanded idea formatted into a blog post, a white paper introduction, or perhaps integrated into your website’s philosophy page?